New Book Warns of AI Extinction Threat as Tech Giants Race to Scale Models
Eliezer Yudkowsky and Nate Soares' controversial new book "If Anyone Builds It, Everyone Dies" posits an existential threat from artificial superintelligence. The authors contend that self-preservation drives emerging in uncontrolled AI systems would lead inevitably to human extinction, and they compare the current AI arms race among corporations and governments to a "suicide race."
The warning comes amid deepening fractures within the AI community. Doomers advocate development pauses, while accelerationists push for rapid advancement, citing potential medical and scientific breakthroughs. This ideological divide has overshadowed more immediate concerns about algorithmic bias, job displacement, and misinformation proliferation in current-generation AI systems.
At the core of the debate lies a fundamental shift from programmed to emergent machine behavior. Modern neural networks develop capabilities through the training of billions of parameters rather than through explicit coding, creating what the authors describe as an unpredictable and potentially uncontrollable scaling of intelligence.